    Optimization of the CMS software build and distribution system

    CMS software consists of over two million lines of code actively developed by hundreds of developers from all around the world. Optimally building, releasing and distributing such a large-scale system for production and analysis activities at hundreds of sites and on multiple platforms is quite a challenge, and its dependency on more than one hundred external tools makes the build and distribution even more complex. We describe how parallel building of the software and minimizing the size of the distribution dramatically reduced the time gap between a software build and its installation at remote sites, and how producing a few large binary products, instead of thousands of small ones, helped uncover integration and runtime issues in the software.
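
    A minimal sketch in Python of the parallel-building idea mentioned above: packages are compiled concurrently as soon as all of their dependencies have finished. The dependency graph, the package names and build_package() are hypothetical stand-ins, not the actual CMS build tooling.

```python
# Illustrative only: a toy parallel build scheduler over a package
# dependency graph; build_package() stands in for the real compile step.
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

# deps[pkg] = packages that must be built before pkg (hypothetical names)
deps = {
    "root": set(),
    "boost": set(),
    "fwk": {"root", "boost"},
    "reco": {"fwk"},
}

def build_package(pkg):
    print(f"building {pkg}")
    return pkg

def parallel_build(deps, workers=4):
    remaining = {p: set(d) for p, d in deps.items()}
    futures = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while remaining or futures:
            # submit every package whose dependencies are all built
            for p in [p for p, d in remaining.items() if not d]:
                futures[pool.submit(build_package, p)] = p
                del remaining[p]
            # wait for at least one build to finish, then release dependants
            done, _ = wait(futures, return_when=FIRST_COMPLETED)
            for f in done:
                built = futures.pop(f)
                for d in remaining.values():
                    d.discard(built)

parallel_build(deps)
```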

    CMS Monte Carlo production in the WLCG computing Grid

    Monte Carlo production in CMS has received a major boost in performance and scale since the CHEP06 conference. The production system has been re-engineered to incorporate the experience gained in running the previous system and to integrate production with the new CMS event data model, data management system and data processing framework. The system is interfaced to the two major computing Grids used by CMS, the LHC Computing Grid (LCG) and the Open Science Grid (OSG). Operational experience and integration aspects of the new CMS Monte Carlo production system are presented together with an analysis of production statistics. The new system automatically handles job submission, resource monitoring, job queuing, job distribution according to the available resources, data merging, and registration of data into the data bookkeeping, data location, data transfer and placement systems. Compared to the previous production system, automation, reliability and performance have been considerably improved. A more efficient use of computing resources and better handling of the inherent Grid unreliability have resulted in an increase in production scale by about an order of magnitude: the system is capable of running on the order of ten thousand jobs in parallel and yielding more than two million events per day.
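
    A minimal sketch in Python of the "job distribution according to the available resources" step, under simple assumptions: a round-robin assignment of queued jobs to sites with free slots. Site names, slot counts and the Job type are hypothetical; the real production system is not shown.

```python
# Illustrative only: round-robin distribution of queued jobs to sites
# with free slots; all names are hypothetical stand-ins.
from collections import deque
from dataclasses import dataclass

@dataclass
class Job:
    job_id: int

def distribute(queue, free_slots):
    """Assign queued jobs to sites until the queue or the slots run out."""
    assignment = {site: [] for site in free_slots}
    while queue and any(n > 0 for n in free_slots.values()):
        for site in free_slots:
            if free_slots[site] > 0 and queue:
                assignment[site].append(queue.popleft())
                free_slots[site] -= 1
    return assignment

jobs = deque(Job(i) for i in range(10))
print(distribute(jobs, {"T1_A": 3, "T2_B": 4, "T2_C": 2}))
```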

    The CMS Monte Carlo Production System: Development and Design

    The CMS production system has undergone a major architectural upgrade from its predecessor, with the goals of reducing the operational manpower needed and preparing for the large-scale production required by the CMS physics plan. The new production system is a tiered architecture that facilitates robust, distributed processing of production requests and takes advantage of the multiple Grid and farm resources available to the CMS experiment.

    Distributed Computing Grid Experiences in CMS

    The CMS experiment is currently developing a computing system capable of serving, processing and archiving the large number of events that will be generated when the CMS detector starts taking data. During 2004 CMS undertook a large-scale data challenge to demonstrate the ability of the CMS computing system to cope with a sustained data-taking rate equivalent to 25% of the startup rate. Its goals were to run CMS event reconstruction at CERN for a sustained period at a 25 Hz input rate, to distribute the data to several regional centers, and to enable data access at those centers for analysis. Grid middleware was utilized to help complete all aspects of the challenge. To continue to provide scalable access from anywhere in the world to the data, CMS is developing a layer of software that uses Grid tools to gain access to data and resources, and that aims to provide physicists with a user-friendly interface for submitting their analysis jobs. This paper describes the data challenge experience with Grid infrastructure and the current development of the CMS analysis system.
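
    A minimal sketch in Python of the kind of user-facing layer described above, which hides data location and Grid submission behind one call. locate_dataset, split_dataset and grid_submit are hypothetical stubs standing in for real data-location services and Grid tools.

```python
# Illustrative only: a thin analysis-submission wrapper; every helper
# below is a hypothetical stub, not a real Grid or CMS tool.
def locate_dataset(dataset):
    return ["SiteA", "SiteB"]  # sites hosting the dataset (stub)

def split_dataset(dataset, n_jobs):
    return [f"{dataset}#block{i}" for i in range(n_jobs)]  # stub file blocks

def grid_submit(executable, block, candidate_sites):
    print(f"submitting {executable} on {block} to {candidate_sites[0]}")
    return hash((executable, block))  # stand-in job identifier

def submit_analysis(dataset, executable, n_jobs):
    """One user-facing call: locate the data, split it, submit the jobs."""
    sites = locate_dataset(dataset)
    return [grid_submit(executable, b, sites)
            for b in split_dataset(dataset, n_jobs)]

submit_analysis("/SomeDataset/RECO", "analysis_job.sh", 3)
```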

    Performance of the CMS Cathode Strip Chambers with Cosmic Rays

    The Cathode Strip Chambers (CSCs) constitute the primary muon tracking device in the CMS endcaps. Their performance has been evaluated using data taken during a cosmic ray run in fall 2008. Measured noise levels are low, with the fraction of noisy channels well below 1%. The coordinate resolution was measured for all types of chambers and falls in the range of 47 to 243 microns. The efficiencies for local charged-track triggers, and for hit and segment reconstruction, were measured and are above 99%. The timing resolution per layer is approximately 5 ns.

    Performance and Operation of the CMS Electromagnetic Calorimeter

    The operation and general performance of the CMS electromagnetic calorimeter using cosmic-ray muons are described. These muons were recorded after the closure of the CMS detector in late 2008. The calorimeter is made of lead tungstate crystals, and the overall status of the 75,848 channels comprising the barrel and endcap detectors is reported. The stability of crucial operational parameters, such as high voltage, temperature and electronic noise, is summarised, and the performance of the light monitoring system is presented.

    CMS physics technical design report : Addendum on high density QCD with heavy ions

    Peer reviewed

    Calibration of the CMS Drift Tube Chambers and Measurement of the Drift Velocity with Cosmic Rays

    Peer reviewed

    CMS Data Processing Workflows during an Extended Cosmic Ray Run

    Peer reviewed

    Aligning the CMS Muon Chambers with the Muon Alignment System during an Extended Cosmic Ray Run

    Peer reviewed